
Conversation

@alpe
Contributor

@alpe alpe commented Nov 27, 2025

Overview

Resolves #2609

The basic idea is to store an additional DAHeightHint field within the p2p store.

Since SignedHeader and Data are also used in other places, I did not modify them; instead I added a DAHeightHintContainer type that wraps the tuple for the store only.
The DA height hint is added by the da_submitter and read in the syncer to fetch the missing DA header/data for the most recent block as required.
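For illustration, a rough sketch of the wrapper idea (field names here are assumptions, not the exact PR code; the actual type appears in the diff below):

// A minimal sketch: pair the p2p payload with a DA height hint so both can be
// persisted in the p2p store. It assumes the header.Header interface from the
// go-header library and the existing types package.
type DAHeightHintContainer[H header.Header[H]] struct {
	Message      H      // the wrapped header or data payload
	DAHeightHint uint64 // DA height at which the payload was included; 0 means unknown
}

// Aliases as they appear later in the diff.
type SignedHeaderWithDAHint = DAHeightHintContainer[*types.SignedHeader]
type DataWithDAHint = DAHeightHintContainer[*types.Data]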

Please note: this is a breaking change to the p2p network and store

@claude
Contributor

claude bot commented Nov 27, 2025

Claude finished @alpe's task in 2m 51s.


Comprehensive Code Review: DA Hints in P2P

I've completed a thorough review of PR #2891, which adds DA height hints to the p2p layer. Overall, this is a well-designed feature with good test coverage. Below are my findings organized by category.


Executive Summary

Strengths:

  • ✅ Clever binary compatibility approach using proto optional fields
  • ✅ Well-tested async DA retriever with good concurrency patterns
  • ✅ Comprehensive unit tests with 70% patch coverage
  • ✅ Clean separation of concerns with P2PEnvelope wrapper

Critical Issues:

  • ⚠️ Missing migration documentation/strategy for existing deployments
  • ⚠️ Potential race condition in SetProcessedHeight retry loop
  • ⚠️ Error handling gaps in async DA retrieval

1. Architecture & Design ⭐⭐⭐⭐½

Strengths

The P2PEnvelope generic wrapper pattern is excellent:

type P2PEnvelope[H header.Header[H]] struct {
    Message      H
    DAHeightHint uint64
}

This design:

  • Maintains clean separation between domain types (SignedHeader, Data) and transport concerns
  • Implements the header.Header interface properly via delegation
  • Provides binary compatibility through proto optional fields (see the compatibility sketch after this list)
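A hedged sketch of a one-way compatibility test for that last point (the generated pb types, their import path, and the getter name are assumptions; proto.Marshal/Unmarshal are from google.golang.org/protobuf):

package types_test

import (
	"testing"

	"google.golang.org/protobuf/proto"

	pb "github.com/evstack/ev-node/types/pb/evnode/v1" // assumed import path
)

// Sketch: bytes produced by the legacy SignedHeader message must decode into
// the new P2PSignedHeader type, with the hint defaulting to 0.
func TestLegacyBytesDecodeIntoEnvelope(t *testing.T) {
	legacy := &pb.SignedHeader{} // populate from existing fixtures in a real test
	raw, err := proto.Marshal(legacy)
	if err != nil {
		t.Fatal(err)
	}

	var envelope pb.P2PSignedHeader
	if err := proto.Unmarshal(raw, &envelope); err != nil {
		t.Fatalf("legacy payload should decode into the new type: %v", err)
	}
	if got := envelope.GetDaHeightHint(); got != 0 {
		t.Fatalf("hint should default to 0 for legacy payloads, got %d", got)
	}
}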

Concerns

1. Binary Compatibility Testing Gaps (types/p2p_envelope_test.go:92-117)

The tests verify one-way compatibility (old format → new type), but missing:

  • New format → old clients (what happens when old nodes receive DAHeightHint?)
  • Large-scale network upgrade simulation
  • Store migration testing

Recommendation: Add integration tests simulating mixed-version networks and document the rollout strategy clearly.

2. Interface Segregation (block/internal/common/expected_interfaces.go:18-22)

The Broadcaster interface combines three concerns:

type Broadcaster[H header.Header[H]] interface {
    WriteToStoreAndBroadcast(ctx context.Context, payload H, opts ...pubsub.PubOpt) error
    AppendDAHint(ctx context.Context, daHeight uint64, hashes ...types.Hash) error
    GetByHeight(ctx context.Context, height uint64) (H, uint64, error)
}

Recommendation: Consider splitting into BroadcastWriter, DAHintAppender, and HeightReader for better testability and single-responsibility.
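A sketch of that split, reusing the method signatures quoted above:

// Sketch of the proposed decomposition; consumers then depend only on the
// capability they actually use.
type BroadcastWriter[H header.Header[H]] interface {
	WriteToStoreAndBroadcast(ctx context.Context, payload H, opts ...pubsub.PubOpt) error
}

type DAHintAppender interface {
	AppendDAHint(ctx context.Context, daHeight uint64, hashes ...types.Hash) error
}

type HeightReader[H header.Header[H]] interface {
	GetByHeight(ctx context.Context, height uint64) (H, uint64, error)
}

// The existing Broadcaster can remain as a composition for call sites that
// genuinely need all three.
type Broadcaster[H header.Header[H]] interface {
	BroadcastWriter[H]
	DAHintAppender
	HeightReader[H]
}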


2. Error Handling & Resilience ⭐⭐⭐⭐

Strengths

  • Proper error wrapping throughout
  • Context cancellation handled correctly
  • Graceful degradation when DA hints unavailable

Issues

1. Silent Error Ignoring (block/internal/submitting/da_submitter.go:213-216)

if err := s.headerDAHintAppender.AppendDAHint(ctx, res.Height, hashes...); err != nil {
    s.logger.Error().Err(err).Msg("failed to append da height hint in header p2p store")
    // ignoring error here, since we don't want to block the block submission'
}

Issue: This creates an inconsistency window where DA submission succeeds but hints aren't stored. Syncing nodes may miss optimization opportunities.

Recommendation:

  • Add a metric to track hint append failures
  • Consider retry logic with exponential backoff (sketched after this list)
  • Document the consistency model clearly
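A minimal sketch of the retry idea (the helper and local interface names are hypothetical; only the AppendDAHint signature is taken from the code above):

// Sketch: retry the hint append a few times with exponential backoff instead
// of giving up after the first failure. Hints stay advisory, so the caller can
// still log-and-continue after the last attempt rather than fail submission.
type daHintAppender interface {
	AppendDAHint(ctx context.Context, daHeight uint64, hashes ...types.Hash) error
}

func appendDAHintWithRetry(ctx context.Context, a daHintAppender, daHeight uint64, hashes ...types.Hash) error {
	backoff := 100 * time.Millisecond
	var err error
	for attempt := 0; attempt < 3; attempt++ {
		if err = a.AppendDAHint(ctx, daHeight, hashes...); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
			backoff *= 2
		}
	}
	return err // caller logs the error and bumps a hint-append-failure metric
}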

2. Async DA Retrieval Error Handling (block/internal/syncing/async_da_retriever.go:98-102)

events, err := r.retriever.RetrieveFromDA(r.ctx, height)
if err != nil {
    r.logger.Debug().Err(err).Uint64("height", height).Msg("async DA retrieval failed")
    return
}

Issue: Transient DA failures are dropped permanently. No retry mechanism.

Recommendation: Implement bounded retry with exponential backoff or re-queue failed heights for later retry.
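One possible shape for the re-queue variant (retrievalJob, processJob, enqueue, and handleEvents are hypothetical names; RetrieveFromDA matches the snippet above):

// Sketch: instead of dropping a failed height permanently, re-queue it with a
// bounded attempt count and a short delay so transient DA errors get retried.
type retrievalJob struct {
	height  uint64
	attempt int
}

func (r *AsyncDARetriever) processJob(job retrievalJob) {
	events, err := r.retriever.RetrieveFromDA(r.ctx, job.height)
	if err != nil {
		if job.attempt < 3 {
			time.AfterFunc(time.Duration(job.attempt+1)*time.Second, func() {
				r.enqueue(retrievalJob{height: job.height, attempt: job.attempt + 1})
			})
			return
		}
		r.logger.Warn().Err(err).Uint64("height", job.height).Msg("async DA retrieval gave up after retries")
		return
	}
	r.handleEvents(events) // hand off to the existing processing path
}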


3. Concurrency & Thread Safety ⭐⭐⭐⭐

Strengths

  • AsyncDARetriever uses proper synchronization primitives
  • In-flight deduplication prevents redundant DA fetches
  • Worker pool with bounded concurrency (5 workers)

Critical Issue

Race Condition in SetProcessedHeight (block/internal/syncing/p2p_handler.go:62-72)

func (h *P2PHandler) SetProcessedHeight(height uint64) {
    for range 1_000 {
        current := h.processedHeight.Load()
        if height <= current {
            return
        }
        if h.processedHeight.CompareAndSwap(current, height) {
            return
        }
    }
}

Issues:

  1. Hardcoded 1000 retry limit seems arbitrary - what happens after exhaustion?
  2. No logging/metrics when CAS fails repeatedly
  3. Potential livelock under extreme contention

Recommendation:

func (h *P2PHandler) SetProcessedHeight(height uint64) {
    const maxRetries = 100
    for i := 0; i < maxRetries; i++ {
        current := h.processedHeight.Load()
        if height <= current {
            return
        }
        if h.processedHeight.CompareAndSwap(current, height) {
            return
        }
        if i > 10 && i%10 == 0 {
            h.logger.Warn().Int("retries", i).Uint64("height", height).Msg("high CAS contention")
        }
    }
    h.logger.Error().Uint64("height", height).Msg("failed to update processed height after max retries")
}

2. Async DA Retriever Shutdown (block/internal/syncing/async_da_retriever.go:50-57)

✅ Clean shutdown pattern - workers respect context cancellation and properly signal completion via WaitGroup.


4. Security ⭐⭐⭐⭐⭐

Excellent Security Practices

1. Proposer Validation (block/internal/syncing/p2p_handler.go:132-137)

func (h *P2PHandler) assertExpectedProposer(proposerAddr []byte) error {
    if !bytes.Equal(h.genesis.ProposerAddress, proposerAddr) {
        return fmt.Errorf("proposer address mismatch: got %x, expected %x",
            proposerAddr, h.genesis.ProposerAddress)
    }
    return nil
}

✅ Prevents accepting blocks from unauthorized proposers.

2. Data Hash Validation (block/internal/syncing/p2p_handler.go:104-108)

if !bytes.Equal(header.DataHash[:], dataCommitment[:]) {
    err := fmt.Errorf("data hash mismatch: header %x, data %x", header.DataHash, dataCommitment)
    h.logger.Warn().Uint64("height", height).Err(err).Msg("discarding inconsistent block from P2P")
    return err
}

✅ Cryptographic validation prevents inconsistent data/header pairs.

3. DA Height Hint Tampering

Issue: DA hints are not cryptographically authenticated. Malicious peers can provide incorrect hints.

Impact: Low - worst case is inefficient DA fetching (falls back to canonical source). But could enable DoS via resource exhaustion.

Recommendation:

  • Document that hints are advisory only
  • Add rate limiting on DA retrieval requests per peer (see the sketch after this list)
  • Consider metrics to detect peers consistently providing bad hints
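A sketch of per-peer rate limiting with golang.org/x/time/rate (where it hooks into the syncer, and the libp2p peer.ID import, are assumptions; the limits are placeholders):

import (
	"sync"

	"github.com/libp2p/go-libp2p/core/peer"
	"golang.org/x/time/rate"
)

// Sketch: cap how many hint-triggered DA retrievals a single peer can cause.
// Hints remain advisory; exceeding the budget just means the hint is ignored
// and the node falls back to the normal DA scan.
type peerHintLimiter struct {
	mu       sync.Mutex
	limiters map[peer.ID]*rate.Limiter
}

func (p *peerHintLimiter) allow(id peer.ID) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	l, ok := p.limiters[id]
	if !ok {
		l = rate.NewLimiter(rate.Limit(2), 10) // ~2 hint fetches/sec, burst of 10 (tunable)
		p.limiters[id] = l
	}
	return l.Allow()
}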

5. Performance ⭐⭐⭐⭐

Strengths

1. Worker Pool Pattern (block/internal/syncing/async_da_retriever.go:40-48)

func (r *AsyncDARetriever) Start(ctx context.Context) {
    r.ctx, r.cancel = context.WithCancel(ctx)
    for i := 0; i < 5; i++ {
        r.wg.Add(1)
        go r.worker()
    }
}

✅ Bounded concurrency prevents resource exhaustion. Buffer size of 100 is reasonable.

2. In-Flight Deduplication (block/internal/syncing/async_da_retriever.go:59-76)

func (r *AsyncDARetriever) RequestRetrieval(height uint64) {
    r.mu.Lock()
    defer r.mu.Unlock()
    
    if _, exists := r.inFlight[height]; exists {
        return
    }
    // ...
}

✅ Prevents duplicate DA fetches for same height.

Concerns

1. Hot Lock Path (block/internal/syncing/async_da_retriever.go:62)

Every RequestRetrieval call acquires a mutex. Under high block rate, this could become a bottleneck.

Recommendation: Use sync.Map or sharded locks if profiling shows contention.
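A sketch of the sync.Map variant, assuming the same queue channel and dedup semantics as the current code:

// Sketch: sync.Map.LoadOrStore gives per-height deduplication without holding
// a shared mutex on every request.
var inFlight sync.Map // map[uint64]struct{}

func requestRetrieval(height uint64, queue chan<- uint64) {
	if _, loaded := inFlight.LoadOrStore(height, struct{}{}); loaded {
		return // already being fetched
	}
	select {
	case queue <- height:
	default:
		inFlight.Delete(height) // queue full: release the slot so it can be retried later
	}
}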

2. No Back-Pressure Handling (block/internal/syncing/async_da_retriever.go:73-75)

default:
    r.logger.Debug().Uint64("height", height).Msg("DA retrieval worker pool full, dropping request")
}

Issue: Silent drop on full queue. Under sustained load, DA retrieval could fall behind indefinitely.

Recommendation: Add metrics for dropped requests and consider prioritizing recent heights.
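A sketch of such a metric with prometheus/client_golang (metric name and subsystem are assumptions):

import "github.com/prometheus/client_golang/prometheus"

// Sketch: make queue-full drops visible as a counter instead of only a debug log.
var droppedDARetrievals = prometheus.NewCounter(prometheus.CounterOpts{
	Subsystem: "syncing",
	Name:      "da_retrieval_dropped_total",
	Help:      "DA retrieval requests dropped because the worker queue was full.",
})

func init() {
	prometheus.MustRegister(droppedDARetrievals)
}

// In RequestRetrieval's default: branch, alongside the debug log:
//   droppedDARetrievals.Inc()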


6. Testing ⭐⭐⭐⭐

Strengths

1. Comprehensive Async DA Retriever Tests (block/internal/syncing/async_da_retriever_test.go)

✅ Tests cover:

  • Successful retrieval
  • Idempotency/deduplication
  • Worker pool saturation
  • Concurrent request handling

2. Binary Compatibility Tests (types/p2p_envelope_test.go:92-153)

✅ Verifies old wire format can be read by new code.

Coverage Gaps (from Codecov)

1. types/p2p_envelope.go: 53.42% coverage - Missing 29 lines

Focus on error paths in MarshalBinary/UnmarshalBinary.

2. block/internal/syncing/syncer.go: 58.62% coverage - Missing 12 lines

Likely the DA hint triggered retrieval logic (lines 608-618).

Recommendation: Add integration test that:

  1. Syncs block via P2P with DA hint
  2. Verifies AsyncDARetriever is triggered
  3. Confirms DA data is fetched and cached

7. Documentation ⭐⭐⭐

Good

  • Clear code comments explaining binary compatibility
  • Proto field documentation adequate

Missing

1. Migration Guide

There is NO documentation on how to upgrade existing networks. Required information:

  • Can nodes upgrade one-by-one or need coordinated upgrade?
  • What happens when new nodes connect to old nodes?
  • Do p2p stores need migration?
  • Rollback procedure if issues arise?


2. ADR Missing

A breaking p2p change should have an Architecture Decision Record explaining:

  • Problem statement
  • Alternatives considered (why not in proto from start?)
  • Trade-offs of binary compatibility approach
  • Performance implications



8. Breaking Change Assessment

Network Impact

Proto Changes:

  • Added P2PSignedHeader and P2PData proto messages
  • Used optional uint64 da_height_hint fields

Binary Compatibility Claim:
The PR claims binary compatibility via optional fields. My verification:

Forward compatible: Old nodes can deserialize new messages (ignores optional field)
⚠️ Backward compatible: New nodes handle old messages (hint defaults to 0)
⚠️ Mixed network: Should work, but untested at scale

Store Impact

The p2p store format changes from SignedHeader/Data to P2PSignedHeader/P2PData.

Question: What happens to existing store entries? Are they readable?

Looking at the proto, the wire format is compatible (same field numbers 1-3), so existing data should be readable with hint=0.

Store appears forward-compatible


9. Recommendations by Priority

P0 - Must Fix Before Merge

  1. Document migration strategy - Create MIGRATION.md with:

    • Upgrade procedure
    • Compatibility matrix (which versions work together)
    • Rollback steps
    • Known issues and workarounds
  2. Fix SetProcessedHeight race - Add logging, metrics, and reasonable failure handling

  3. Test on existing network - As requested by @tac0turtle, validate on a real deployment

P1 - Should Fix Soon

  1. Add retry logic for DA hint append failures - Prevents silent inconsistency

  2. Implement retry for failed async DA retrievals - Don't permanently drop on transient errors

  3. Add metrics for:

    • DA hint append failures
    • Async DA retrieval queue depth
    • Dropped DA retrieval requests
    • CAS contention in SetProcessedHeight

P2 - Nice to Have

  1. Split Broadcaster interface - Better testability and SRP compliance

  2. Add integration tests for mixed-version network - Increases confidence in rollout

  3. Document DA hint security model - Clarify hints are advisory, not authenticated

  4. Create ADR - Document architectural decision for future reference


10. Specific Code Issues

Issue 1: Infinite Deferred Cleanup Risk

Location: block/internal/syncing/async_da_retriever.go:92-96

func (r *AsyncDARetriever) processRetrieval(height uint64) {
    defer func() {
        r.mu.Lock()
        delete(r.inFlight, height)
        r.mu.Unlock()
    }()

Issue: If retrieval panics before defer runs, height stays in inFlight forever.

Fix: Already handled correctly by defer at top of function. ✅ No issue.

Issue 2: Potential Nil Pointer

Location: pkg/sync/sync_service.go:184-198

func (s *SyncService[V]) AppendDAHint(ctx context.Context, daHeight uint64, hashes ...types.Hash) error {
    entries := make([]V, 0, len(hashes))
    for _, h := range hashes {
        v, err := s.store.Get(ctx, h)
        if err != nil {
            if errors.Is(err, header.ErrNotFound) {
                continue
            }
            return err
        }
        v.SetDAHint(daHeight)
        entries = append(entries, v)
    }
    return s.store.Append(ctx, entries...)
}

Analysis: Could s.store be nil? Looking at constructor:

ss, err := goheaderstore.NewStore[V](...)

If NewStore returns nil with error, constructor returns error. ✅ No issue.

Issue 3: DAHeightHints Array Bounds

Location: block/internal/syncing/syncer.go:597-606

if _, exists := s.cache.GetDataDAIncluded(event.Data.Hash().String()); !exists {
    daHeightHints = append(daHeightHints, event.DaHeightHints[1])
}

Issue: Direct index access [1] could panic if array size changes.

Fix: Use named indices or constants:

const (
    HeaderDAHintIdx = 0
    DataDAHintIdx   = 1
)
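The call site quoted above would then read (sketch):

// The index now names what it selects, so a layout change is easy to spot.
if _, exists := s.cache.GetDataDAIncluded(event.Data.Hash().String()); !exists {
	daHeightHints = append(daHeightHints, event.DaHeightHints[DataDAHintIdx])
}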

11. Positive Highlights

Excellent Design Patterns

  1. Generic P2PEnvelope - Brilliant use of Go generics for type-safe wrapper
  2. Worker Pool Implementation - Textbook example of bounded concurrency
  3. In-Flight Deduplication - Prevents wasteful duplicate DA fetches
  4. Graceful Degradation - System works without hints (falls back to full DA scan)
  5. Test Quality - Comprehensive unit tests with good use of mocking

Code Quality

  • Consistent error handling patterns
  • Good logging practices (structured logging with context)
  • Proper use of context.Context for cancellation
  • Clean separation of concerns

Summary

This is a well-executed feature with a clever approach to binary compatibility. The main gaps are:

  1. Documentation - Migration strategy and ADR missing
  2. Testing - Needs validation on existing network
  3. Resilience - Error handling could be more robust (retries, metrics)
  4. Concurrency - Minor race condition in SetProcessedHeight

The code is production-ready after addressing P0 items. The binary compatibility approach is sound, but requires real-world validation before merging.

Recommendation: Approve with required changes to P0 items.


CI Status

✅ All checks passing or in progress. No test failures detected.


@github-actions
Contributor

github-actions bot commented Nov 27, 2025

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build: ✅ passed | Format: ⏩ skipped | Lint: ✅ passed | Breaking: ✅ passed | Updated (UTC): Jan 19, 2026, 9:48 AM

@alpe alpe changed the title from "Alex/hints" to "feat: DA hints in p2p" on Nov 28, 2025
@codecov

codecov bot commented Nov 28, 2025

Codecov Report

❌ Patch coverage is 70.17544% with 68 lines in your changes missing coverage. Please review.
✅ Project coverage is 59.42%. Comparing base (dfd76ac) to head (4907c92).

Files with missing lines | Patch % | Lines
types/p2p_envelope.go | 53.42% | 29 Missing and 5 partials ⚠️
block/internal/syncing/syncer.go | 58.62% | 12 Missing ⚠️
pkg/sync/sync_service.go | 69.44% | 7 Missing and 4 partials ⚠️
block/internal/syncing/async_da_retriever.go | 86.53% | 6 Missing and 1 partial ⚠️
block/internal/submitting/da_submitter.go | 80.95% | 2 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2891      +/-   ##
==========================================
+ Coverage   59.30%   59.42%   +0.12%     
==========================================
  Files         105      107       +2     
  Lines        9998    10184     +186     
==========================================
+ Hits         5929     6052     +123     
- Misses       3442     3496      +54     
- Partials      627      636       +9     
Flag Coverage Δ
combined 59.42% <70.17%> (+0.12%) ⬆️


alpe added 3 commits November 28, 2025 17:20
* main:
  refactor: omit unnecessary reassignment (#2892)
  build(deps): Bump the all-go group across 5 directories with 6 updates (#2881)
  chore: fix inconsistent method name in retryWithBackoffOnPayloadStatus comment (#2889)
  fix: ensure consistent network ID usage in P2P subscriber (#2884)
cache.SetHeaderDAIncluded(headerHash.String(), res.Height, header.Height())
hashes[i] = headerHash
}
if err := s.headerDAHintAppender.AppendDAHint(ctx, res.Height, hashes...); err != nil {
Contributor Author

This is where the DA height is passed to the sync service to update the p2p store

Msg("P2P event with DA height hint, triggering targeted DA retrieval")

// Trigger targeted DA retrieval in background via worker pool
s.asyncDARetriever.RequestRetrieval(daHeightHint)
Contributor Author

This is where the "fetch from DA" is triggered for the current block event height

type SignedHeaderWithDAHint = DAHeightHintContainer[*types.SignedHeader]
type DataWithDAHint = DAHeightHintContainer[*types.Data]

type DAHeightHintContainer[H header.Header[H]] struct {
@alpe alpe Dec 1, 2025

This is a data container to persist the DA hint together with the block header or data.
types.SignedHeader and types.Data are used all over the place, so I did not modify them and instead introduced this type for the p2p store and transfer only.

It may make sense to make this a Proto type. WDYT?

return nil
}

func (s *SyncService[V]) AppendDAHint(ctx context.Context, daHeight uint64, hashes ...types.Hash) error {
Contributor Author

Stores the DA height hints

@alpe alpe marked this pull request as ready for review December 1, 2025 09:32
@tac0turtle
Contributor

If the DA hint is not in the proto, how do other nodes get knowledge of the hint?

Also, how would an existing network handle using this feature? It's breaking, so is it safe to upgrade?

"github.com/evstack/ev-node/block/internal/cache"
"github.com/evstack/ev-node/block/internal/common"
"github.com/evstack/ev-node/block/internal/da"
coreda "github.com/evstack/ev-node/core/da"
Member

nit: gci linter

Member

@julienrbrt julienrbrt left a comment

Nice! It really makes sense.

However, I share the same concern as @tac0turtle about the upgrade strategy, given that it is p2p-breaking.

julienrbrt
julienrbrt previously approved these changes Dec 2, 2025
@alpe
Contributor Author

alpe commented Dec 2, 2025

> If the DA hint is not in the proto, how do other nodes get knowledge of the hint?

The sync_service wraps the header/data payload in a DAHeightHintContainer object that is passed upstream to the p2p layer. When the DA height is known, the store is updated.

> Also, how would an existing network handle using this feature? It's breaking, so is it safe to upgrade?

It is a breaking change. Instead of the signed header or data types, the p2p network exchanges DAHeightHintContainer, which would be incompatible. The existing p2p stores would also need migration to work.

@julienrbrt
Member

julienrbrt commented Dec 4, 2025

Could we broadcast both until every network is updated? Then, for the final version, we can basically discard the previous one.

@alpe
Contributor Author

alpe commented Dec 5, 2025

FYI: this PR is missing a migration strategy for the p2p store (and ideally the network).

* main:
  refactor(sequencers): persist prepended batch (#2907)
  feat(evm): add force inclusion command (#2888)
  feat: DA client, remove interface part 1: copy subset of types needed for the client using blob rpc. (#2905)
  feat: forced inclusion (#2797)
  fix: fix and cleanup metrics (sequencers + block) (#2904)
  build(deps): Bump mdast-util-to-hast from 13.2.0 to 13.2.1 in /docs in the npm_and_yarn group across 1 directory (#2900)
  refactor(block): centralize timeout in client (#2903)
  build(deps): Bump the all-go group across 2 directories with 3 updates (#2898)
  chore: bump default timeout (#2902)
  fix: revert default db (#2897)
  refactor: remove obsolete // +build tag (#2899)
  fix:da visualiser namespace  (#2895)
alpe added 3 commits December 15, 2025 10:52
* main:
  chore: execute goimports to format the code (#2924)
  refactor(block)!: remove GetLastState from components (#2923)
  feat(syncing): add grace period for missing force txs inclusion (#2915)
  chore: minor improvement for docs (#2918)
  feat: DA Client remove interface part 2,  add client for celestia blob api   (#2909)
  chore: update rust deps (#2917)
  feat(sequencers/based): add based batch time (#2911)
  build(deps): Bump golangci/golangci-lint-action from 9.1.0 to 9.2.0 (#2914)
  refactor(sequencers): implement batch position persistance (#2908)
github-merge-queue bot pushed a commit that referenced this pull request Dec 15, 2025

## Overview

Temporary fix until #2891.
After #2891 the verification for p2p blocks will be done in the
background.

ref: #2906

@alpe
Contributor Author

alpe commented Dec 15, 2025

I have added 2 new types for the p2p store that are binary compatible with types.Data and types.SignedHeader. With this, we should be able to roll this out without breaking in-flight p2p data or the store.

alpe added 3 commits December 15, 2025 14:49
* main:
  fix(syncing): skip forced txs checks for p2p blocks (#2922)
  build(deps): Bump the all-go group across 5 directories with 5 updates (#2919)
  chore: loosen syncer state check (#2927)
@alpe alpe requested a review from julienrbrt December 15, 2025 15:00
julienrbrt
julienrbrt previously approved these changes Dec 15, 2025
Member

@julienrbrt julienrbrt left a comment

lgtm! I can see how useful the async retriever will be for force inclusion verification as well. We should have @auricom verify if p2p still works with Eden.

Member

This is going to be really useful for force inclusion checks as well.

* main:
  build(deps): Bump actions/cache from 4 to 5 (#2934)
  build(deps): Bump actions/download-artifact from 6 to 7 (#2933)
  build(deps): Bump actions/upload-artifact from 5 to 6 (#2932)
  feat: DA Client remove interface part 3, replace types with new code (#2910)
  DA Client remove interface: Part 2.5, create e2e test to validate that a blob is posted in DA layer. (#2920)
julienrbrt
julienrbrt previously approved these changes Dec 16, 2025
alpe added 3 commits December 19, 2025 17:00
* main:
  feat: use DA timestamp (#2939)
  chore: improve code comments clarity (#2943)
  build(deps): bump libp2p (#2937)
(cherry picked from commit ad3e21b)
julienrbrt
julienrbrt previously approved these changes Dec 19, 2025
* main:
  fix: make evm_execution more robust (#2942)
  fix(sequencers/single): deterministic queue (#2938)
  fix(block): fix init logic sequencer for da epoch fetching (#2926)
github-merge-queue bot pushed a commit that referenced this pull request Jan 2, 2026
Introduce envelope for headers on DA to fail fast on unauthorized
content.
Similar approach as in #2891 with a binary compatible sibling type that
carries the additional information.
 
* Add DAHeaderEnvelope type to wrap signed headers on DA
  * Binary compatible with the `SignedHeader` proto type
  * Includes a signature of the plain content
* DARetriever checks for a valid signature early in the process
* Supports `SignedHeader` for legacy compatibility until the first signed envelope is read
alpe added 2 commits January 8, 2026 10:06
* main:
  chore: fix some minor issues in the comments (#2955)
  feat: make reaper poll duration configurable (#2951)
  chore!: move sequencers to pkg (#2931)
  feat: Ensure Header integrity on DA (#2948)
  feat(testda): add header support with GetHeaderByHeight method (#2946)
  chore: improve code comments clarity (#2947)
  chore(sequencers): optimize store check (#2945)
@tac0turtle
Contributor

CI seems to be having some issues; can these be fixed?

Also, was this tested on an existing network? If not, please do that before merging.

alpe added 4 commits January 19, 2026 09:46
* main:
  fix: inconsistent state detection and rollback (#2983)
  chore: improve graceful shutdown restarts (#2985)
  feat(submitting): add posting strategies (#2973)
  chore: adding syncing tracing (#2981)
  feat(tracing): adding block production tracing (#2980)
  feat(tracing): Add Store, P2P and Config tracing (#2972)
  chore: fix upgrade test (#2979)
  build(deps): Bump github.com/ethereum/go-ethereum from 1.16.7 to 1.16.8 in /execution/evm/test in the go_modules group across 1 directory (#2974)
  feat(tracing): adding tracing to DA client (#2968)
  chore: create onboarding skill  (#2971)
  test: add e2e tests for force inclusion (part 2) (#2970)
  feat(tracing): adding eth client tracing (#2960)
  test: add e2e tests for force inclusion (#2964)
  build(deps): Bump the all-go group across 4 directories with 10 updates (#2969)
  fix: Fail fast when executor ahead (#2966)
  feat(block): async epoch fetching (#2952)
  perf: tune badger defaults and add db bench (#2950)
  feat(tracing): add tracing to EngineClient (#2959)
  chore: inject W3C headers into engine client and eth client (#2958)
  feat: adding tracing for Executor and added initial configuration (#2957)
* main:
  feat(tracing): tracing part 9 sequencer (#2990)
  build(deps): use mainline go-header (#2988)